AI tools are growing in popularity at enterprises, but not all of them are approved by employers – and that’s a serious problem for IT and security leaders
Almost half of UK workers admit to using non-approved AI tools without their employer’s knowledge, according to new research.
The use of shadow AI, described in the report as ‘bring your own AI’ (BYO-AI), is contributing to overall AI use, with more than two-thirds of employees now using AI tools regularly at work.
According to data from Owl Labs’ annual State of Hybrid Work Report, it’s mostly younger age groups that are behind the shadow AI trend, with 63% of Gen Z and Millennial employees – those aged between 18 and 43 – using AI for their work at least once a week.
The same is true of only 43% of Gen X and Boomer team members, aged 44 and over.
“As hybrid work becomes the norm, workforces are shifting from merely experimenting with AI tools to actively using them to boost productivity and efficiency in their everyday work,” said Frank Weishaupt, CEO of Owl Labs.
“This transformation calls for a smart, coordinated approach in which business leaders ensure that AI integration is both purposeful and secure, with safeguards that mitigate risks associated with unchecked use,” he added.
“Employees will need the skills and confidence to work effectively with these tools, whether at home or in the office, to foster a productive and resilient hybrid workforce.”
The growth of shadow AI at enterprises has been a recurring talking point over the last year, with IT and security leaders alike voicing repeated concerns over the issue.
A recent study from Deloitte, for example, revealed that nearly one-third of workers are paying for and using AI tools that haven’t been authorized by their employer.
Almost one in five believe that ‘a great deal’ of employees in the UK are using generative AI without their employer’s explicit approval, while a further 45% reckon it’s ‘a fair amount’.
When asked why they use these applications, two-fifths (40%) of employees said they do not see any risk. Around one-third also doubted whether their employer could detect the use of non-approved apps.
But there are significant risks for both individuals and enterprises alike, according to analysis from BCS, the Chartered Institute for IT.
Staff using non-approved tools risk breaching data privacy rules, exposing themselves to potential security vulnerabilities, and even falling foul of intellectual property rights.
“Organizations can navigate these challenges by establishing robust governance frameworks, involving senior management, regularly assessing risks, educating employees and leveraging technologies,” advised BCS fellow Dr T.W. Kwan.
“As AI continues to permeate the workplace, a proactive and adaptive governance approach is crucial for harnessing its potential safely and responsibly.”